Adapting object detectors learned with sufficient supervision to novel classes under low-data regimes is appealing yet challenging. In few-shot object detection (FSOD), a two-step training paradigm is widely adopted to mitigate the severe sample imbalance: holistic pre-training on base classes, followed by partial fine-tuning in a balanced setting with all classes. Since unlabeled instances are suppressed as background in the base-training phase, the learned RPN is prone to producing biased proposals for novel instances, resulting in dramatic performance degradation. Unfortunately, the extreme data scarcity aggravates the proposal distribution bias, hindering the RoI head from evolving toward novel classes. In this paper, we introduce a simple yet effective proposal distribution calibration (PDC) approach that enhances the localization and classification abilities of the RoI head by recycling the localization ability endowed during base training and enriching high-quality positive samples for semantic fine-tuning. Specifically, we sample proposals based on base-class proposal statistics to calibrate the distribution bias and impose additional localization and classification losses on the sampled proposals, quickly expanding the base detector to novel classes. Experiments on the widely used Pascal VOC and MS COCO datasets show clear state-of-the-art performance, justifying the efficacy of our PDC for FSOD. Code is available at github.com/Bohao-Lee/PDC.
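To make the calibration idea above concrete, here is a minimal sketch of sampling extra proposals around novel-class ground-truth boxes using Gaussian offset statistics collected during base training. The function name, the (dx, dy, dw, dh) parameterization, and the Gaussian assumption are ours for illustration; the paper's actual sampling scheme may differ.

```python
import numpy as np

def sample_calibrated_proposals(gt_boxes, base_stats, num_samples=16, rng=None):
    """Sample extra proposals around ground-truth boxes of novel classes.

    Offsets (dx, dy, dw, dh) are drawn from a Gaussian fitted to the
    offset statistics of base-class proposals, so the sampled proposals
    mimic the proposal distribution the RoI head saw during base training.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    mean, std = base_stats  # each of shape (4,): (dx, dy, dw, dh) statistics
    samples = []
    for (x1, y1, x2, y2) in gt_boxes:
        w, h = x2 - x1, y2 - y1
        cx, cy = x1 + 0.5 * w, y1 + 0.5 * h
        deltas = rng.normal(mean, std, size=(num_samples, 4))
        ncx = cx + deltas[:, 0] * w          # shift centers
        ncy = cy + deltas[:, 1] * h
        nw = w * np.exp(deltas[:, 2])        # rescale widths/heights
        nh = h * np.exp(deltas[:, 3])
        boxes = np.stack([ncx - nw / 2, ncy - nh / 2,
                          ncx + nw / 2, ncy + nh / 2], axis=1)
        samples.append(boxes)
    return np.concatenate(samples, axis=0)

# Example: one ground-truth box, offset statistics assumed from base training.
gt = np.array([[50.0, 40.0, 150.0, 120.0]])
stats = (np.zeros(4), np.array([0.1, 0.1, 0.2, 0.2]))
print(sample_calibrated_proposals(gt, stats).shape)  # (16, 4)
```

The sampled boxes would then receive the extra localization and classification losses described above, alongside the RPN's own proposals.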
Neural text ranking models have witnessed significant advances and are increasingly deployed in practice. Unfortunately, they also inherit the adversarial vulnerabilities of general neural models, which have been detected but largely overlooked by prior studies. Moreover, black-hat SEO may exploit these inherited vulnerabilities to defeat better-protected search engines. In this study, we propose an imitation adversarial attack on black-box neural passage ranking models. We first show that the target passage ranking model can be transparentized and imitated by enumerating critical queries/candidates and then training a ranking imitation model. Leveraging the ranking imitation model, we can elaborately manipulate the ranking results and transfer the manipulation attack to the target ranking model. To this end, we propose an innovative gradient-based attack method, empowered by a pairwise objective function, to generate adversarial triggers that cause premeditated disorderliness with very few tokens. To equip the triggers with camouflage, we add a next-sentence-prediction loss and a language-model fluency constraint to the objective function. Experimental results on passage ranking demonstrate the effectiveness of the ranking imitation attack model and the adversarial triggers against various SOTA neural ranking models. Furthermore, various mitigation analyses and human evaluation show the effectiveness of the camouflage when facing potential mitigation approaches. To motivate other scholars to further investigate this novel and important problem, we make the experimental data and code publicly available.
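As a rough illustration of the pairwise objective used to train the ranking imitation model, the sketch below fits an imitation model's scores to the ordering returned by the black-box target; the hinge formulation and margin are our assumptions, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def pairwise_hinge_loss(imitation_scores, target_order, margin=1.0):
    """Train an imitation model to reproduce the target model's ranking.

    imitation_scores: (n,) scores from the imitation model for n candidates.
    target_order: indices of the candidates sorted by the (black-box)
        target model, best first -- obtained by querying the target.
    For every pair ranked (i above j) by the target, the imitation model
    is pushed to score i higher than j by at least `margin`.
    """
    s = imitation_scores[target_order]          # scores in target order
    diff = s.unsqueeze(1) - s.unsqueeze(0)      # diff[i, j] = s_i - s_j
    upper = torch.triu(torch.ones_like(diff), diagonal=1)  # pairs with i < j
    return (F.relu(margin - diff) * upper).sum() / upper.sum()

scores = torch.tensor([0.2, 0.9, 0.5], requires_grad=True)
order = torch.tensor([1, 2, 0])                 # target ranks doc1 > doc2 > doc0
loss = pairwise_hinge_loss(scores, order)
loss.backward()                                 # gradients drive imitation training
```

Once the imitation model matches the target's orderings, trigger tokens can be optimized against it with ordinary gradients and transferred to the black-box target.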
While annotating large amounts of data to satisfy sophisticated learning models can be prohibitively expensive for many real-world applications, active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, means of alleviating the data-hungry problem. Some recent studies have explored the potential of combining AL and SSL to better probe unlabeled data. However, almost all of these contemporary SSL-AL works adopt a simple combination strategy, ignoring the inherent relationship between SSL and AL. Other methods suffer from high computational costs when dealing with large-scale, high-dimensional datasets. Motivated by the industry practice of labeling data, we propose an innovative inconsistency-based virtual adversarial active learning algorithm (IDEAL) to further investigate the potential advantages of SSL-AL and to achieve mutual enhancement between AL and SSL: SSL propagates label information to unlabeled samples and furnishes smooth embeddings for AL, while AL excludes samples with inconsistent predictions and considerable uncertainty. We estimate the inconsistency of unlabeled samples with augmentation strategies of different granularities, including fine-grained continuous perturbation exploration and coarse-grained data transformations. Extensive experiments in both text and image domains validate the effectiveness of the proposed algorithm against state-of-the-art baselines. Two real-world case studies visualize the practical industrial value of applying and deploying the proposed data-sampling algorithm.
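The inconsistency estimate can be pictured as disagreement between predictions on perturbed views of each unlabeled sample. Below is a hedged sketch using the mean KL divergence across augmented views as the acquisition score; the scoring function and the noise augmentation are illustrative stand-ins for the paper's fine- and coarse-grained strategies.

```python
import torch
import torch.nn.functional as F

def inconsistency_scores(model, x_unlabeled, augment, n_views=4):
    """Score unlabeled samples by how much their predictions disagree
    across augmented views. Higher score = more inconsistent, hence a
    better candidate to query for a label.
    """
    model.eval()
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(augment(x_unlabeled)), dim=1) for _ in range(n_views)]
        )                                         # (n_views, N, C)
        mean = probs.mean(dim=0, keepdim=True)    # consensus prediction
        # Average KL divergence of each view from the consensus.
        kl = (probs * (probs.clamp_min(1e-8).log()
                       - mean.clamp_min(1e-8).log())).sum(-1)
        return kl.mean(dim=0)                     # (N,)

# Toy usage with a linear "model" and Gaussian-noise augmentation.
model = torch.nn.Linear(8, 3)
x = torch.randn(100, 8)
scores = inconsistency_scores(model, x, lambda t: t + 0.1 * torch.randn_like(t))
query_idx = scores.topk(10).indices               # samples to send for labeling
```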
Dialogue summarization has been widely studied and applied. Prior works mainly focused on exploring superior model structures to align the input dialogue with the output summary. However, for professional dialogues (e.g., legal debates and medical diagnoses), semantic/statistical alignment can hardly fill the logical/factual gap between the input dialogue utterances and the summary output that requires external knowledge. In this paper, we mainly investigate the factual inconsistency problem for Dialogue Inspectional Summarization (DIS) under non-pretraining and pretraining settings. An innovative end-to-end dialogue summary generation framework is proposed with two auxiliary tasks: Expectant Factual Aspect Regularization (EFAR) and Missing Factual Entity Discrimination (MFED). Comprehensive experiments demonstrate that the proposed model can generate a more readable summary with accurate coverage of factual aspects, and can inform users of potential missing facts detected in the input dialogue for further human intervention.
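One plausible way to wire the two auxiliary tasks into training is as extra classification losses added to the generation loss. The sketch below assumes both EFAR and MFED reduce to binary-cross-entropy heads with hand-picked weights, which is our simplification rather than the paper's exact formulation.

```python
import torch

def dis_training_loss(gen_loss, aspect_logits, aspect_labels,
                      entity_logits, entity_labels, w_efar=0.5, w_mfed=0.5):
    """Combine the summary-generation loss with the two auxiliary tasks:
    EFAR as multi-label prediction of which factual aspects the summary
    should cover, and MFED as per-entity detection of missing facts.
    """
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    efar = bce(aspect_logits, aspect_labels)   # expected factual aspects
    mfed = bce(entity_logits, entity_labels)   # missing factual entities
    return gen_loss + w_efar * efar + w_mfed * mfed

# Toy usage: 4 dialogues, 6 candidate aspects, 10 candidate entities.
gen = torch.tensor(2.3)
aspect_logits = torch.randn(4, 6)
aspect_labels = torch.randint(0, 2, (4, 6)).float()
entity_logits = torch.randn(4, 10)
entity_labels = torch.randint(0, 2, (4, 10)).float()
print(dis_training_loss(gen, aspect_logits, aspect_labels,
                        entity_logits, entity_labels))
```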
Injecting human knowledge is an effective way to accelerate reinforcement learning (RL). However, such methods remain underexplored. This paper presents our finding that an abstract forward model, which we call a Thought Game (TG), combined with transfer learning (TL), is an effective approach. We take StarCraft II as our study environment. With the help of the designed TG, the agent can learn to achieve a 99% win rate on a 64x64 map against the level-1 built-in AI using only 1.08 hours on a single commercial machine. We also show that the TG method is not as restrictive as it was thought to be: it can work with roughly designed TGs and remains useful when the environment changes. Compared with previous model-based RL, we show that TG is more effective. We also present a TG hypothesis that characterizes the impact of TGs of different fidelity levels. For real games with unequal state and action spaces, we propose a novel XfrNet, whose usefulness is validated while achieving a 90% win rate against the cheating level-10 AI. We argue that the TG method deserves further study as an approach to exploiting human knowledge.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even when the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
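The implicit alignment can be read as adding 3D-coordinate-derived positional embeddings to both token streams. A minimal sketch, assuming an MLP over points sampled along each image token's camera ray (the module name and sampling scheme are our assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class PointPositionalEncoding(nn.Module):
    """Map sampled 3D points to positional embeddings via an MLP, so that
    image tokens and point-cloud tokens share one 3D-aware coordinate frame
    and can be fed jointly to a transformer without explicit view transforms.
    """
    def __init__(self, dim=256, n_depth_points=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * n_depth_points, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, points):  # points: (num_tokens, n_depth_points, 3)
        return self.mlp(points.flatten(1))

enc = PointPositionalEncoding()
# For each image token, 3D points sampled along its ray (assumed here).
ray_points = torch.randn(100, 8, 3)
image_tokens = torch.randn(100, 256) + enc(ray_points)  # 3D-aware tokens
```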
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
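The difference between the two attacks is where the trigger is optimized. Below is a schematic sketch of the DOORPING-style alternation, with the actual distillation and trigger-update rules left as caller-supplied placeholders, since they depend on the distillation technique being attacked.

```python
import torch

def doorping_style_distillation(synthetic, trigger, distill_step, trigger_step,
                                n_iters=1000):
    """Alternate a dataset-distillation update with a trigger update: the
    synthetic set is optimized as usual, then the trigger is re-optimized
    so it stays effective against models trained on the current synthetic
    set. NAIVEATTACK, by contrast, would stamp the trigger once up front.
    """
    for _ in range(n_iters):
        synthetic = distill_step(synthetic, trigger)  # usual distillation update
        trigger = trigger_step(synthetic, trigger)    # keep backdoor effective
    return synthetic, trigger

# Toy usage with placeholder update rules.
syn = torch.randn(100, 3, 32, 32)       # synthetic images being distilled
trg = torch.zeros(3, 4, 4)              # small patch trigger
syn, trg = doorping_style_distillation(
    syn, trg,
    distill_step=lambda s, t: s - 0.01 * torch.randn_like(s),  # placeholder
    trigger_step=lambda s, t: t,                               # placeholder
    n_iters=10)
```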
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS and its incremental variants, introducing a new framework named Reference Twice (RefT) that fully exploits the relationship between support and query features within a Transformer-like framework. Our key insights are twofold. First, with the aid of support masks, we can generate dynamic class centers that more appropriately re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our framework can easily be extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be available.
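A minimal sketch of the first, feature-level reference: pooling support features under the instance mask into a dynamic class center and using it to gate query features channel-wise. The sigmoid gating is our assumption; the paper's re-weighting may be more elaborate.

```python
import torch

def reweight_query_features(query_feats, support_feats, support_mask):
    """First 'reference': pool support features inside the instance mask to
    get a dynamic class center, then re-weight query features channel-wise
    by that center.
    query_feats:  (C, H, W)    support_feats: (C, H, W)
    support_mask: (H, W) binary foreground mask
    """
    m = support_mask.flatten().float()                    # (H*W,)
    s = support_feats.flatten(1)                          # (C, H*W)
    center = (s * m).sum(1) / m.sum().clamp_min(1.0)      # (C,) class center
    weight = torch.sigmoid(center)                        # channel gate
    return query_feats * weight[:, None, None]

q = torch.randn(256, 32, 32)
s = torch.randn(256, 32, 32)
mask = torch.rand(32, 32) > 0.5
print(reweight_query_features(q, s, mask).shape)  # torch.Size([256, 32, 32])
```

The second, instance-level reference would then link support object queries to query-image object queries via cross-attention, which is omitted here.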
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity between the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based solely on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
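The iRMB idea can be approximated as an inverted residual whose expanded branch also mixes tokens with self-attention. The sketch below is a loose re-creation under that reading, not the paper's exact block (kernel sizes, normalization, and where the attention sits all differ):

```python
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    """A rough sketch: an inverted residual (expand -> depthwise conv ->
    project) combined with multi-head self-attention over spatial tokens,
    so one block gets both local (conv) and long-range (attention) modeling.
    """
    def __init__(self, dim, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.project = nn.Conv2d(hidden, dim, 1)
        self.act = nn.SiLU()

    def forward(self, x):                          # x: (B, C, H, W)
        B, C, H, W = x.shape
        t = x.flatten(2).transpose(1, 2)           # (B, HW, C) tokens
        t, _ = self.attn(t, t, t)                  # long-distance interaction
        a = t.transpose(1, 2).reshape(B, C, H, W)
        y = self.project(self.act(self.dw(self.act(self.expand(x + a)))))
        return x + y                               # residual connection

block = iRMBSketch(64)
print(block(torch.randn(2, 64, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])
```

Stacking such blocks in a ResNet-like 4-phase layout is the essence of the EMO design described above.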
Benefiting from its capability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms prevent existing algorithms from improving further. 1) The quality of positive samples heavily depends on carefully designed data augmentations, and inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples, respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
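A compact sketch of the cluster-guided objective: cross-view embeddings of the same sample act as positives, and centers of the other high-confidence clusters act as negatives. The exact loss below (mean cosine gap, one direction only) is our simplification of the described scheme.

```python
import torch
import torch.nn.functional as F

def ccgc_style_loss(z1, z2, labels, centers2):
    """Pull together cross-view embeddings of samples from the same
    high-confidence cluster; push each view-1 embedding away from the
    centers of the *other* clusters in view 2 (reliable negatives).
    z1, z2:   (N, D) embeddings from the two Siamese encoders
    labels:   (N,) high-confidence cluster assignments
    centers2: (K, D) cluster centers computed from view 2
    A symmetric term using view-1 centers would be added in practice.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    c2 = F.normalize(centers2, dim=1)
    pos = (z1 * z2).sum(1)                        # cross-view positives
    sim_to_centers = z1 @ c2.T                    # (N, K) cosine similarities
    neg_mask = torch.ones_like(sim_to_centers)
    neg_mask[torch.arange(len(z1)), labels] = 0   # drop own-cluster center
    neg = (sim_to_centers * neg_mask).sum(1) / neg_mask.sum(1)
    return (neg - pos).mean()                     # maximize pos, minimize neg

z1, z2 = torch.randn(32, 16), torch.randn(32, 16)
labels = torch.randint(0, 4, (32,))
centers2 = torch.randn(4, 16)
print(ccgc_style_loss(z1, z2, labels, centers2))
```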